
The SignforAll 2025 Challenge

The SignforAll 2025 Challenge will be featured at the IEEE International Workshop on Machine Learning for Signal Processing (MLSP) 2025. The shared task is part of the special session Sign Language Translation in the Era of Large Language Models - Beyond English. This special session addresses sign language translation challenges at the intersection of AI and accessibility, introducing a novel dataset co-developed with sign language users and interpreters. The session will provide a venue to discuss the generalization capabilities of sign language translation models, addressing a fundamental limitation in the field.

The Saudi Sign Language Shared Task

The Saudi Sign Language (SSL) shared task aims to advance sign language recognition by addressing key challenges in the SSL dataset, including recognizing signs from unseen signers, recognizing unseen sentences, and handling conditions involving face coverings.

Task Description

Participants will develop models to translate SSL signs into text. There are three tracks, corresponding to the three evaluation conditions: unseen signers with unseen sentences, seen signers with unseen sentences, and unseen signers with seen sentences.

The official evaluation metric is the BLEU score, computed with the SacreBLEU library.
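To make the metric concrete, the sketch below computes a sentence-level BLEU score (modified n-gram precisions plus a brevity penalty) in plain Python. It is illustrative only; official scores are produced by the SacreBLEU library, whose tokenization and smoothing may give different numbers.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Illustrative sentence-level BLEU-N (0-100) with brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())   # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * brevity * geo_mean

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # 100.0
```

The BLEU-1 through BLEU-4 columns in the baseline tables correspond to `max_n` values of 1 through 4.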

Baseline Results

The results of training T5 model variants on the pose features are shown in the table below:

Test-1: Unseen Signers - Unseen Sentences

Model               BLEU-1  BLEU-2  BLEU-3  BLEU-4
T5-Base              24.05    9.59    4.96    2.73
T5v1.1-Base          26.16    9.25    4.39    1.59
mT5-Base (English)   23.63    8.32    3.98    1.46
mT5-Base (Arabic)    10.84    2.99    1.28    0.72

Test-2: Seen Signers - Unseen Sentences

Model               BLEU-1  BLEU-2  BLEU-3  BLEU-4
T5-Base              24.46    9.14    4.53    2.01
T5v1.1-Base          26.87   11.25    6.05    2.78
mT5-Base (English)   26.72   10.53    5.42    2.79
mT5-Base (Arabic)    13.30    4.28    1.64    0.66

Test-3: Unseen Signers - Seen Sentences

Model               BLEU-1  BLEU-2  BLEU-3  BLEU-4
T5-Base              84.07   81.09   80.59   80.37
T5v1.1-Base          88.46   86.27   85.84   85.75
mT5-Base (English)   87.76   85.62   85.22   85.16
mT5-Base (Arabic)    85.54   84.27   83.99   83.87
The results show that fine-tuning from pretrained checkpoints significantly improves model performance, as indicated by higher BLEU scores across all test sets. The gains are most pronounced on unseen data, suggesting that pre-training enhances the models' generalization ability. For a more detailed comparison, please refer to the baseline paper.
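A common way to feed continuous pose features to a T5-style encoder is to project each frame's keypoint vector into the model's embedding space and pass the result as input embeddings. The NumPy sketch below illustrates only this projection step; the frame count, keypoint layout, and embedding size are assumptions for illustration, not the baseline's actual configuration.

```python
import numpy as np

# Hypothetical shapes: T frames, K keypoints with (x, y, confidence) each,
# and a T5-Base-sized embedding dimension.
T, K, d_model = 120, 75, 768
pose = np.random.randn(T, K * 3).astype(np.float32)  # flattened per-frame features

# A learned linear projection into the encoder's embedding space
# (randomly initialized here purely for illustration).
W = (np.random.randn(K * 3, d_model) * 0.02).astype(np.float32)
b = np.zeros(d_model, dtype=np.float32)

# One embedding vector per frame, usable as the encoder's input sequence.
inputs_embeds = pose @ W + b
print(inputs_embeds.shape)  # (120, 768)
```

In a framework such as Hugging Face Transformers, a tensor shaped like `inputs_embeds` (with a leading batch dimension) can be supplied to the encoder in place of token embeddings.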
Results after fine-tuning from pretrained checkpoints:

Test-1: Unseen Signers - Unseen Sentences

Model               BLEU-1  BLEU-2  BLEU-3  BLEU-4
T5-Base              35.89   17.33   11.14    7.48
T5v1.1-Base          34.76   16.79   10.05    5.56
mT5-Base (English)   33.50   16.07    9.77    5.66
mT5-Base (Arabic)    16.75    5.40    1.86    0.81

Test-2: Seen Signers - Unseen Sentences

Model               BLEU-1  BLEU-2  BLEU-3  BLEU-4
T5-Base              35.78   17.53   10.34    5.72
T5v1.1-Base          35.59   16.96    9.92    5.23
mT5-Base (English)   32.92   14.85    8.52    4.74
mT5-Base (Arabic)    18.16    6.37    2.66    1.47

Test-3: Unseen Signers - Seen Sentences

Model               BLEU-1  BLEU-2  BLEU-3  BLEU-4
T5-Base              95.17   94.02   93.78   93.67
T5v1.1-Base          94.50   93.16   92.83   92.58
mT5-Base (English)   94.64   93.44   93.19   93.04
mT5-Base (Arabic)    92.97   92.37   92.42   92.48

Baseline

To replicate our pipeline, please check the Kaggle web page.

You can access the repository via the following link:

GitHub Repository

Important Dates

Licence

Creative Commons Attribution-NonCommercial (CC BY-NC)

Contact

For questions, use the following contact email:
Email: signforall@googlegroups.com

Organizers
